🗺️ Region Inference
Memory Safety, Lifetime Analysis, MLKit, Region Types
Scoured 82,278 posts in 569.6 ms
🥇 Top AI Papers of the Week · nlp.elvissaravia.com · 4h · [🌱 Minimal ML]
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory · news.ycombinator.com · 15h · Discuss: Hacker News · [🔒 Rust Borrowing]
Performance Tip of the Week #62: Identifying and reducing memory bandwidth needs · abseil.io · 20h · [⚡ Cache-Aware Algorithms]
Fastfood: Approximate Kernel Expansions in Loglinear Time · dev.to · 17h · Discuss: DEV · [📐 Succinct Data Structures]
Towards Worst-Case Guarantees with Scale-Aware Interpretability · arxiv.org · 2d · [🪜 Recursive Descent]
abdimoallim/alloc: A header-only C allocator library · github.com · 1h · Discuss: Hacker News, r/C_Programming · [🧠 Memory Allocators]
From Monolith to Micro-Brain: Architecting Scalable AI Inference in .NET · dev.to · 23h · Discuss: DEV · [🏗️ MLIR]
Show HN: Model Training Memory Simulator · czheo.github.io · 9h · Discuss: Hacker News · [🧠 Memory Hierarchy]
Unlocking core memories with GoldSrc engine and CS 1.6 (2025) · danielbrendel.com · 7h · Discuss: Hacker News · [📏 Linear Memory]
Accelerate your discovery by parallelizing experiments · magellink.com · 2h · Discuss: Hacker News · [🔀 SIMD Programming]
From Prediction to Compilation: A Manifesto for Intrinsically Reliable AI · news.ycombinator.com · 7h · Discuss: Hacker News · [🚂 Error Propagation]
A Normalized Gaussian Wasserstein Distance for Tiny Object Detection · paperium.net · 6h · Discuss: DEV · [🌱 Minimal ML]
Creeping memory allocation · community.folivora.ai · 11h · [📊 Memory Profilers]
How I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS · mohammedeabdelaziz.github.io · 1d · Discuss: Hacker News · [🦀 MIR Optimization]
How we cut Vertex AI latency by 35% with GKE Inference Gateway · cloud.google.com · 2d · [📊 Profilers]
Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU · github.com · 1d · Discuss: Hacker News, r/LocalLLaMA · [🖥️ Minimal VMs]
Lazy-pulling containers: 65x faster pulls, but 20x slower readiness · blog.zmalik.dev · 1h · Discuss: Hacker News · [💾 Zero-Copy]
Building a Dynamic Multilanguage System Without Rebuilds · kuldeepmodi.vercel.app · 4h · Discuss: DEV · [🌉 Cross-Language Tools]
Understanding LLM Inference Engines: Inside Nano-vLLM (Part 2) · neutree.ai · 2d · Discuss: Hacker News · [🔄 Subinterpreters]
Evolving our real-time timeseries storage again: Built in Rust for performance at scale · datadoghq.com · 1h · [📮 Message Queues]